
Fast IP Routing



  1. Fast IP Routing Axel Clauberg Consulting Engineer Cisco Systems Axel.Clauberg@cisco.com

  2. Agenda • The Evolution of IP Routing • Transmission Update: 10GE • Router Architectures • So, it's all just speed?

  3. The Evolution of IP Routing

  4. Past Heard around the corner? • IP routers are slow and software-based • IP routers cause high latency • IP routers are non-deterministic • IP routers do not support QoS

  5. WAN Customer Access Speed Evolution • Late 1980s: 9.6 Kb/s .. 64 Kb/s • Early 1990s: 64 Kb/s .. 2 Mb/s • Late 1990s: 2 Mb/s .. 155 Mb/s • Early 2000s: 155 Mb/s .. 10 Gb/s

  6. Backbone Evolution
  WAN:
  • Late 1980s: 56/64 Kb/s
  • Early 1990s: 1.5/2 Mb/s
  • Mid 1990s: 34 Mb/s, 155 Mb/s
  • Late 1990s: 622 Mb/s, 2.5 Gb/s
  • Early 2000s: 10 Gb/s, 40 Gb/s
  Campus:
  • Late 1980s: 10 Mb/s
  • Early 1990s: 100 Mb/s (FDDI)
  • Mid 1990s: 155 Mb/s (ATM)
  • Late 1990s: n x FE, 155 Mb/s, 622 Mb/s, GE
  • Early 2000s: 10 Gb/s, n x 10 Gb/s

  7. Transmission Update: 10GE

  8. MAN/WAN IP Transport Alternatives (diagram: four protocol stacks, ordered from highest to lowest cost and overhead)
  • IP over ATM (B-ISDN): IP / ATM / SONET-SDH / Optical, with multiplexing, protection and management at every layer
  • IP over SDH: IP / SONET-SDH / Optical
  • IP over Optical: IP / Optical
  • IP over Ethernet (GE, 10GE): IP / Ethernet / Optical

  9. Ethernet Scaling History • 1981: Shared 10 Mbit (1x) • 1992: Switched 10 Mbit (10x) • 1995: Switched 100 Mbit (100x) • 1998: Switched 1 Gigabit (1000x) • 200x: Switched 10 Gigabit (10000x)

  10. Moving the Decimal Point: 10 GbE Performance and Scalability (timeline, 1996-2002)
  • 100 Mbps: Fast Ethernet (1996), Fast EtherChannel
  • 1 Gbps: Gigabit Ethernet, Gigabit EtherChannel
  • 10 Gbps: 10 GbE (IEEE 802.3ae standard), comparable to STM-64; targets LAN, metro and WAN applications

  11. Why 10 Gigabit Ethernet • Aggregates Gigabit Ethernet segments • Scales Enterprise and Service Provider LAN backbones • Leverages installed base of 250 million Ethernet switch ports • Supports all services (packetized voice and video, data) • Supports metropolitan and wide area networks • Faster and simpler than other alternatives

  12. IEEE Goals for 10 GbE (Partial List) • Preserve 802.3 Ethernet frame format • Preserve minimum and maximum frame size of current 802.3 Ethernet • Support only full-duplex operation • Support 10,000 Mbps at the MAC interface • Define two families of PHYs • LAN PHY operating at 10 Gbps • Optional WAN PHY operating at a data rate compatible with the payload rate of OC-192c/SDH VC-4-64c

  13. IEEE 802.3ae Task Force Milestones (timeline, 1999-2002: HSSG formed, PAR drafted, PAR approved, 802.3ae formed, first draft, working group ballot, LMSC ballot, standard, first 10GE deliveries)
  • HSSG = Higher Speed Study Group
  • PAR = Project Authorization Request
  • 802.3ae = the name of the project and of the IEEE 802.3 sub-committee chartered with writing the 10GbE standard
  • Working group ballot = the task force submits a complete draft to the larger 802.3 committee for technical review and ballot
  • LMSC ballot = LAN/MAN Standards Committee ballot; any member of the superset of 802 committees may vote and comment on the draft

  14. 10 Gigabit Ethernet Media Goals (media type / fiber / max distance)
  • 1550 nm laser, extended reach: standard/dispersion-free fiber, 40-100 km
  • 1300 nm laser, standard reach: single-mode fiber, 2-10 km
  • 1300 nm laser, CWDM (4 x 2.5 Gb/s): multimode fiber, 300 m
  • 780 nm VCSEL, multichannel ribbon: multimode fiber, 200 m

  15. IEEE Status • 802.3ae meeting, July 10-14, 2000 • 75% consensus: • 1550 nm transceiver, 40 km over SMF • 1300 nm transceiver, 10 km over SMF • No consensus yet: • Multimode support • 300 m with 62.5 µm 160/500 MHz·km MMF • 50 µm 2000/500 MHz·km MMF

  16. Router Architectures

  17. Components • Memory Architecture • Interconnect • Forwarding Engine • Scalability • Stability • Queueing / QoS

  18. Basic Design (diagram: inputs and outputs on interfaces, buffer memory, forwarding engine, route processor) • Data arrive in random sizes • Arrival is asynchronous, unpredictable, and independent on each interface • Data have to be buffered • TCP/IP traffic is bursty, but causes only short-term congestion

  19. How Much Buffering? • Rule of thumb: RTT x BW (Villamizar & Song, "High Performance TCP in ANSNET", 1994) • STM-16 @ 200 ms RTT: ~60 MB of buffering capacity
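The rule of thumb above can be sketched as a quick calculation; the STM-16 payload rate used here is an assumption of roughly 2.488 Gb/s:

```python
# Sketch of the RTT x BW buffer-sizing rule of thumb (Villamizar & Song, 1994).
# The link rate below is an assumption, not taken from the slides.

def buffer_bytes(link_bps: float, rtt_s: float) -> float:
    """Buffer capacity in bytes = link bandwidth x round-trip time."""
    return link_bps * rtt_s / 8  # divide by 8 to convert bits to bytes

STM16_BPS = 2.488e9  # ~2.488 Gb/s line rate (assumed)
print(buffer_bytes(STM16_BPS, 0.200) / 1e6)  # ~62 MB, in line with the slide's ~60 MB
```

The same arithmetic explains why buffer memory, not just lookup speed, became a design constraint as line rates grew.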

  20. How to Buffer ? • SRAM • Fast, Power-hungry, Density 8 Mb -> 16 Mb, Simple Controller Design • DRAM / SDRAM • Slower, Less Power, Density 64 Mb -> 256 Mb, Complex Controller Design

  21. Interconnect • Switch Fabric / Crossbar • Shared Memory • Variations

  22. Switch Fabric / Crossbar (diagram: ingress and egress line cards 0..N plus route processors, connected through a scheduled switch fabric) • Packet forwarding decision is made on each linecard • Ingress and egress buffering on linecards • Possible problem: head-of-line (HOL) blocking • Solution: VOQ

  23. Linecard in Detail (diagram: physical-layer optics, Layer 3 engine, CPU, and fabric interface with RX/TX paths to the scheduled switch fabric) • HOL blocking can occur when a packet cannot flow off the transmit linecard • The packet is buffered on the receiving linecard • It then blocks other packets destined for other linecards • Solution: virtual output queues, one per egress linecard
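The virtual-output-queue idea can be illustrated with a toy model; the class and method names are hypothetical, not a real router API:

```python
from collections import deque

class IngressLinecard:
    """Toy model of VOQ: one queue per egress linecard, so a congested
    egress cannot block traffic that is bound elsewhere (no HOL blocking)."""
    def __init__(self, num_egress: int):
        self.voq = [deque() for _ in range(num_egress)]

    def enqueue(self, packet, egress: int):
        self.voq[egress].append(packet)

    def dequeue_for(self, egress: int):
        """Called when the scheduler grants this ingress a slot toward `egress`."""
        return self.voq[egress].popleft() if self.voq[egress] else None

card = IngressLinecard(num_egress=4)
card.enqueue("pkt-A", egress=0)   # suppose egress 0 is congested
card.enqueue("pkt-B", egress=2)
# With a single ingress FIFO, pkt-B would wait behind pkt-A (HOL blocking).
# With VOQs, the scheduler can serve egress 2 immediately:
print(card.dequeue_for(2))  # pkt-B
```

The scheduler's job is then to match non-empty VOQs to free egress ports each fabric cycle, which is the hard part in real crossbar designs.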

  24. GSR Queuing Architecture (diagram: receive line cards apply CEF forwarding, CAR policing, W-RED and virtual output queues, feeding a crossbar switch fabric; transmit line cards serve a group of 8 CoS queues per interface with Modified DRR)

  25. Shared Memory Architecture, Physically Centralized (diagram: eight line cards at 2.5 Gbps each attached to a 40 G memory controller with interconnects and forwarding engine) • One large memory system; all data pass through it • Simple memory management • High-speed memory • Simple linecards • Needs SRAM for high speeds

  26. Shared Memory Architecture, Distributed (diagram: line cards 1-8 at 2.5 Gbps each with local memory systems, coordinated by a central memory controller and forwarding engine(s)) • Memory is distributed over the linecards • The memory controller treats the sum of the pieces as one shared memory • Packet forwarding decision in central engine(s) • Difficult to maximize interconnect efficiency • Egress line cards simply request packets from shared memory • Causes head-of-line (HOL) blocking and high latency, worsening under moderate-to-heavy system load or with multicast traffic

  27. Switch Fabric vs. Shared Memory • Shared memory requires only half the buffer space • HOL blocking occurs in shared-memory designs, especially for multicast • Distributed shared memory introduces more points of failure

  28. Forwarding Engine • Classifying the packet • IPv4, IPv6, MPLS, ... • Packet validity (TTL, length, ...) • Next Hop • Basic Statistics • Optional: • Policing, Extended Statistics, RPF check (security, Multicast), QoS, Tunnel, ... • Distributed vs. Central
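The validity checks listed above (TTL, length) can be sketched as a small function; the field names and return convention are illustrative assumptions, not a real forwarding-engine interface:

```python
def validate_ipv4(ttl: int, total_length: int, frame_bytes: int):
    """Minimal sketch of per-packet validity checks a forwarding engine
    applies before the next-hop lookup: drop on expired TTL or
    inconsistent length, otherwise decrement TTL and forward."""
    if ttl <= 1:
        return (False, "ttl-expired")   # would trigger an ICMP Time Exceeded
    if total_length < 20 or total_length > frame_bytes:
        return (False, "bad-length")    # IPv4 header alone is 20 bytes minimum
    return (True, ttl - 1)              # forward with decremented TTL

print(validate_ipv4(ttl=64, total_length=120, frame_bytes=1500))  # (True, 63)
print(validate_ipv4(ttl=1, total_length=120, frame_bytes=1500))   # (False, 'ttl-expired')
```

In hardware these checks run in parallel with the lookup rather than before it, but the logic is the same.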

  29. Central Forwarding? • IP longest-prefix match • Hash vs. TCAM vs. tree lookup • Tree lookup requires a high number of routing-table memory accesses • Needs SRAM • Risk of running out of SRAM • Forwarding speed depends on the depth of the routing table
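Why tree lookup cost tracks table depth can be seen in a minimal unibit-trie sketch of longest-prefix match; real routers use multibit tries or TCAMs, and the prefixes and interface names below are made up:

```python
class TrieNode:
    """One node per address bit; a set next_hop marks the end of a prefix."""
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix_bits: str, next_hop: str):
    node = root
    for b in prefix_bits:
        i = int(b)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def longest_match(root, addr_bits: str):
    """Walk bit by bit, remembering the last prefix seen: one memory
    access per bit, which is why lookup depth limits forwarding speed."""
    node, best = root, None
    for b in addr_bits:
        node = node.children[int(b)]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "10", "if0")     # short prefix (illustrative)
insert(root, "1010", "if1")   # more-specific prefix wins on overlap
print(longest_match(root, "101011"))  # if1
```

A TCAM returns the same answer in one parallel compare, and a hash needs one probe per prefix length, which frames the trade-off the slide names.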

  30. Distributed Forwarding • One copy of the forwarding information per linecard • Parallel processing without synchronization or communication between linecards • Able to use TCAMs and SDRAMs

  31. So, it's all just speed?

  32. So, it's just speed? • Services • IP Multicast • IP QoS • Security • IPv6 • MPLS • Manageability • Availability • Investment protection

  33. Multicast Solutions: End-to-End Architecture (diagram: campus and interdomain multicast spanning ISP A and ISP B, with designated routers (DR), rendezvous points (RP), and multicast sources X and Y)
  Campus multicast:
  • End stations (hosts-to-routers): IGMP
  • Switches (Layer 2 optimization): IGMP Snooping / CGMP
  • Routers (multicast forwarding protocol): PIM Sparse Mode
  Interdomain multicast:
  • Multicast routing across domains: MBGP
  • Multicast source discovery: MSDP with PIM-SM

  34. Summary • IP routers have evolved significantly in recent years • Line rates up to 10 Gb/s • Crossbar architectures with distributed forwarding seem to scale better than shared-memory architectures • Services remain the most decisive factor

  35. Outlook • 10 Gb/s interfaces supported in 2000 • 10GE, STM-64/OC-192 • High density of 10 Gb/s interfaces soon in a PoP • Next step will be STM-256/OC-768 = 40 Gb/s • Will these routers be "palm-size"? • Probably not...

  36. www.cisco.com
