
Extending the Internet Exchange to the Metropolitan Area



  1. Extending the Internet Exchange to the Metropolitan Area Keith Mitchell keith@linx.org Executive Chairman London Internet Exchange ISPcon, London 23rd February 1999

  2. Mostly a Case Study • Background • IXP Architectures & Technology • LINX Growth Issues • New LINX Switches • LINX Second Site

  3. What is the LINX? • UK National IXP • Not-for-profit co-operative of ISPs • Main aim to keep UK domestic Internet traffic in the UK • Increasingly keeping EU traffic in the EU • Largest IXP in Europe

  4. LINX Status • Established Oct 94 by 5 member ISPs • Now has 63 members • 7 FTE dedicated staff • Sub-contracts co-location to 2 neutral sites in London Docklands: • Telehouse • TeleCity • Traffic doubling every ~4 months !
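
The growth rate quoted above compounds quickly; a quick sketch (the 4-month doubling period is the slide's approximate figure):

```python
# Growth implied by "traffic doubling every ~4 months" (slide figure;
# the doubling period is approximate).
def growth_factor(months: float, doubling_period: float = 4.0) -> float:
    """Traffic multiplier after `months` of sustained doubling."""
    return 2 ** (months / doubling_period)

print(growth_factor(12))  # one year at this rate -> 8.0x
```

At that pace, a year of growth means an eightfold increase in traffic, which is why the inter-switch links became the bottleneck discussed later.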

  5. LINX Members

  6. LINX Members by Country

  7. Exchange Point History • Initially established in 1992 by: • MFS, Washington DC - “MAE-East” • Commercial Internet Exchange, Silicon Valley - “CIX-West” • Amsterdam, Stockholm, others soon afterwards • Now at least one in every European, G8, OECD etc country

  8. IXP Architectures • Initially: • 10baseT router to switch • FDDI between switches • commonly DEC Gigaswitches • More recently: • 100baseT between routers and switches • Cisco Catalyst 5000 popular

  9. IXP Technologies • 10Mbps Ethernet • 100Mbps Ethernet • FDDI • ATM • Gigabit Ethernet

  10. IXP Technologies - Ethernet • 10baseT is only really an option for small members with 1 or 2 E1 circuits and no servers at the IXP site • All speeds of Ethernet will remain present in ISP backbones and at servers for some time to come
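
As a sanity check on the 10baseT claim above: an E1 runs at a nominal 2.048 Mbit/s, so even two E1s of upstream capacity fit well within a 10 Mbit/s port (a rough sketch using nominal line rates, ignoring framing overhead):

```python
E1_MBPS = 2.048  # nominal E1 line rate, Mbit/s

def port_utilisation(n_e1: int, port_mbps: float = 10.0) -> float:
    """Fraction of an IXP port consumed if all E1 capacity peaks at once."""
    return n_e1 * E1_MBPS / port_mbps

print(round(port_utilisation(2), 2))  # two E1s on 10baseT -> 0.41
```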

  11. IXP Technologies - 100baseT • Cheap • Proven • Supports full duplex • Meets most non-US ISP switch port bandwidth requirements • Range limitations can be overcome using 100baseFL

  12. IXP Technologies - FDDI • Proven • Bigger MTU (4k vs Ethernet's 1500 bytes) • Dual-attached stations are more resilient • Longer maximum distance • Full-duplex operation is proprietary only
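
One way to see the MTU advantage: fewer frames (and so fewer per-frame header and forwarding costs) to move the same data. A small sketch, assuming FDDI's common ~4352-byte IP MTU against Ethernet's 1500:

```python
import math

def frames_needed(payload_bytes: int, mtu: int) -> int:
    """Frames required to carry a payload at a given MTU (ignoring headers)."""
    return math.ceil(payload_bytes / mtu)

print(frames_needed(1_000_000, 1500))  # Ethernet MTU -> 667 frames
print(frames_needed(1_000_000, 4352))  # FDDI MTU     -> 230 frames
```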

  13. IXP Technologies - ATM • Only used at the US federally-sponsored NAPs and PARIX • Ameritech, PacBell, Sprint, Worldcom; FT • Initially serious deployment problems • “packet-shredding” led to poor bandwidth efficiency • Now >1Gbps traffic at NAPs
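
The "packet-shredding" inefficiency can be quantified: with AAL5, each packet gains an 8-byte trailer, is padded to a 48-byte cell-payload boundary, and is carried in 53-byte cells. A simplified sketch (it ignores any LLC/SNAP encapsulation overhead):

```python
import math

def atm_efficiency(packet_bytes: int) -> float:
    """Useful bytes / wire bytes for one packet over AAL5 (simplified)."""
    cells = math.ceil((packet_bytes + 8) / 48)  # 8-byte trailer, pad to cells
    return packet_bytes / (cells * 53)          # 53-byte cells on the wire

print(round(atm_efficiency(1500), 3))  # full-size packet -> 0.884
print(round(atm_efficiency(40), 3))    # TCP ACK          -> 0.755
```

The cell tax is worst for small packets, and even large packets lose over a tenth of the raw link rate.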

  14. IXP Technologies - ATM • Some advantages: • inter-member bandwidth limits • inter-member bandwidth measurement • “hard” enforcement of peering policy restrictions • But: • High per-port cost, especially for >155Mbps • Limited track record for IXP applications

  15. IXP Technologies - Gigabit Ethernet • Cost-effective and simple high bandwidth • Ideal for scaling inter-switch links • Router vendor support not yet good • Standards very new • Highly promising for metropolitan and even longer-distance links

  16. LINX Architecture • Originally Cisco Catalyst 1200s: • 10baseT to member routers • FDDI ring between switches • Until 98Q3: • Member primary connections by FDDI and 100baseT • Backup connections by 10baseT • FDDI and 100baseT inter-switch

  17. Old LINX Topology

  18. Old LINX Infrastructure • 5 Cisco Switches: • 2 x Catalyst 5000, 3 x Catalyst 1200 • 2 Plaintree switches • 2 x WaveSwitch 4800 • FDDI backbone • Switched FDDI ports • 10baseT & 100baseT ports • Media convertors for fibre Ethernet (>100m)

  19. Growth Issues • Lack of space for new members • Exponential traffic growth • Bottleneck in inter-switch links • Needed to upgrade to Gigabit backbone within existing site 98Q3 • Nx100Mbps trunking does not scale (MAE problems)
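
The trunking limitation noted above follows from how link aggregation typically distributes frames: each address pair is hashed onto a single member link, so one busy peering session can never exceed one link's 100 Mbps, and the spread across links is uneven. A hypothetical sketch (the MAC addresses and the hash function are illustrative, not any vendor's actual algorithm):

```python
def trunk_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Pick a trunk member link by hashing the address pair (illustrative)."""
    return hash((src_mac, dst_mac)) % n_links

# Every frame between one pair of routers lands on the SAME 100 Mbps link:
chosen = {trunk_link("00:a0:c9:00:00:01", "00:60:3e:00:00:02", 4)
          for _ in range(1000)}
print(len(chosen))  # the pair is pinned to a single link
```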

  20. Statistics and looking glass at http://www2.linx.net/

  21. Switch Issues • Catalyst and Plaintree switches no longer in use • Catalyst 5000s appeared to have broadcast scaling issues regardless of Supervisor Engine • FDDI could no longer cope • Plaintree switches had proven too unstable and unmanageable • Catalyst 1200s at end of useful life

  22. LINX Growth Solutions • Find second site within 5km Gigabit Ethernet range via open tender • Secure diverse dark/dim fibre between sites from carriers • Upgrade switches to support Gigabit links between them • Do not offer Gigabit member connections yet

  23. LINX Growth Obstacles • Existing Telehouse site full until 99Q3 extension ready • Poor response to Q4 97 site ITT: • only 3 serious bidders • successful bidder pulled out after messing us around for 6 months :-( • Only two carriers were prepared and able to offer dark/dim fibre after months of discussions

  24. Gigabit Switch Options • Evaluated 6 vendors: • Cabletron/Digital, Cisco, Extreme, Foundry, Packet Engines, Plaintree • Some highly cost-effective options available • But needed non-blocking, modular, future-proof equipment, not workgroup boxes

  25. Metro Gigabit • No real MAN-distance fibre to test kit out on :-( • LINX member COLT kindly lent us a “big drum of fibre” • Most kit appears to work to at least 5km • Some interoperability issues with dim to dark management convertor boxes

  26. Telehouse • Located in London Docklands • on the meridian line at 0° longitude! • 24x7 manned, controlled access • Highly resilient infrastructure • Diverse SDH fibre from most UK carriers • Diverse power from national grid, multiple generators • Owned by consortium of Japanese banks, KDD, BT

  27. LINX and Telehouse • Telehouse is “co-locate” provider • computer and telecoms “hotel” • LINX is customer • About 100 ISPs are customers, including 50 LINX members • other members get space from LINX • Facilitates LAN interconnection

  28. LINX 2nd Site • Secured good deal with two carriers for diverse fibre • but only because LINX is special case • New ITT: • bid deadline mid-Aug 98 • 8 submissions • Awarded to TeleCity Sep 98

  29. LINX and TeleCity • TeleCity is new VC co-lo startup • sites in Manchester, London • London site 3 miles from Telehouse • Same LINX relationship as Telehouse • choice for members • Space for 800 customer racks • LINX has 16-rack suite

  30. New Infrastructure • Packet Engines PR-5200 • Chassis based 16 slot switch • Non-blocking 52Gbps backplane • Used for our core, primary switches • One in Telehouse, one in TeleCity • Will need a second one in Telehouse within this quarter • Supports 1000LX, 1000SX, FDDI and 10/100 ethernet

  31. New Infrastructure • Packet Engines PR-1000: • Small version of PR-5200 • 1U switch; 2x SX and 20x 10/100 • Same chipset as 5200 • Extreme Summit 48: • Used for second connections • Gives vendor resiliency • Excellent edge switch - low cost per port • 2x Gigabit, 48x 10/100 ethernet

  32. New Infrastructure • Topology changes: • Aim to survive a major failure in one switch without affecting member connectivity • Aim to survive major failures on inter-switch links without affecting connectivity • Ensure that inter-switch connections are not bottlenecks

  33. New backbone • All primary inter-switch links are now gigabit • New kit on order to ensure that all inter-switch links are gigabit • Inter-switch traffic minimised by keeping all primary and all backup traffic on their own switches

  34. Current Status • Old switches no longer in use • New Switches live since Dec 98 • TeleCity site has been running since Dec 98 • First in-service member connections at TeleCity soon • Capacity for up to 100x traffic growth
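
The 100x headroom figure can be set against the growth rate quoted earlier: if traffic keeps doubling every ~4 months, 100x of capacity lasts roughly 4 × log2(100) ≈ 27 months. A back-of-envelope sketch using the slides' own figures:

```python
import math

def months_of_headroom(headroom: float, doubling_period: float = 4.0) -> float:
    """Months until capacity is exhausted at a constant doubling rate."""
    return doubling_period * math.log2(headroom)

print(round(months_of_headroom(100)))  # ~27 months at the quoted growth rate
```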

  35. IXP Switch Futures • Vendor claims of 1000baseProprietary 50km+ range are interesting • Need abuse prevention tools: • port filtering, RMON • Need traffic control tools: • member/member bandwidth limiting and measurement • What inter-switch technology will support Gigabit member connections?
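
The member/member bandwidth limiting wished for above is essentially a policer. A hypothetical token-bucket sketch (the class, rates, and the explicit clock parameter are invented for illustration, not any vendor's tool):

```python
class TokenBucket:
    """Per-member rate limiter: admit traffic up to a rate plus a burst."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes  # start with a full burst allowance
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# ~10 Mbit/s cap with a 15 kB burst: a back-to-back run of 1500-byte
# frames is cut off once the burst allowance is spent.
bucket = TokenBucket(rate_bytes_per_s=1_250_000, burst_bytes=15_000)
admitted = sum(bucket.allow(1500, now=0.0) for _ in range(12))
print(admitted)  # only the first 10 frames are admitted
```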

  36. Conclusions • Extending Gigabit beyond your LAN is hard, though not for technical reasons • Only worth trying if you have your own fibre • Some London carriers are meeting the challenge of providing dark fibre • now 4-5 will do this

  37. Contact Information • http://www.linx.net/ • info@linx.org • Tel +44 1733 705000 • Fax +44 1733 353929
