
Force10 Networks - Internet2 Collaboration



Presentation Transcript


  1. Force10 Networks - Internet2 Collaboration Internet2 Member Meeting May 3, 2005 www.force10networks.com

  2. Today’s Speakers • Debbie Montano • Director of Research & Education Alliances • dmontano@force10networks.com • Joel Goergen • Chief Scientist • joel@force10networks.com www.force10networks.com

  3. Topics Debbie Montano • Force10 & Internet2 Collaboration • Hybrid Optical Packet Infrastructure (HOPI) Project • Force10 & Internet2 Members / R&E Community • Starlight • Southern Light Rail • Penn State University • UltraScience Net • TeraGrid • Quick Force10 Overview • Corporate Info & Products Joel Goergen • GigE Technology – to 100 GigE! • …

  4. HOPI - Hybrid Optical Packet Infrastructure Fundamental Questions: How will the core Internet architecture evolve? What should the next-generation Internet2 network infrastructure be? Examining a hybrid of shared IP packet switching and dynamically provisioned optical lambdas; modeling scalable next-generation networks. Force10 Participation in the Internet2 HOPI Project: Internet2 Corporate Partner & HOPI project partner, providing five E600 switch/routers, being deployed in Los Angeles, DC, Chicago, Seattle & New York

  5. Hybrid Optical Packet Infrastructure (HOPI) Node [diagram] OPTICAL side: NLR 10 GigE Lambda via NLR Optical Terminals, Regional Optical Network (RON), Optical Cross Connect. PACKET side: Abilene Network 10 GigE Backbone with Abilene core router and GigaPOPs. Force10 E600 Switch/Router in the HOPI node; out-of-band (OOB) control, measurement and support.

  6. Internet2 HOPI Project

  7. HOPI Nodes with E600s • Washington DC / Virginia - installed • MAX GigaPOP Node, McLean, VA • Los Angeles - installed • CENIC GigaPOP • Chicago • Starlight, 710 N. Lakeshore • Seattle • Pacific Northwest GigaPOP / Pacific Wave • New York • NYSERNET, MANLAN, 32 Avenue of the Americas

  8. UltraScience Net. Many More Force10 Switch/Routers in use by the R&E Community. A few examples…

  9. [Diagram: E1200 cluster-node connections to HARNET, NCDM cluster nodes and DREN over 10GE and GE links, as of April 2005. Click on the E1200 to see a real-time MRTG graph.]

  10. Southern Light Rail • Southern Light Rail • Georgia Tech Non-profit • Participants: Georgia Tech, Georgia State U., Medical College of Georgia, U. of Georgia • Affiliates: Florida LambdaRail, U. of Virginia • Partnership with Southern Crossroads (SOX) GigaPOP to provide services to the research community • NLR access to: • Georgia Research Alliance Universities • Other Southern US universities & research centers • Force10 E1200 Switch/Router • Just installed at 56 Marietta, Atlanta, GA

  11. Southern Light Rail Switch [diagram] Force10 E1200 connecting GT (Georgia Tech), FLR (Florida LambdaRail), UGA, MATP (Mid-Atlantic Terascale Partnership), GSU (Georgia State U.) and ORNL (Oak Ridge National Lab) over 10GE and GE links

  12. Southern Light Rail Services • Broad range of services: • optical lambdas • single or multi-Gigabit Ethernet channels • IPv4/v6 routed service • By controlling the data service all the way down to the optical wavelength, SLR is able to provide dedicated network paths without the congestion and delay variations normally experienced in current IPv4-based networks. • The user's traffic can also be optically isolated from other network users, an important aspect for security-sensitive applications.

  13. Penn State University HPC • Penn State University • Information Technology Services • Academic Services & Emerging Technologies • High Performance Computing (HPC) group • Newest Cluster: LION-XO • 80 Computing Nodes, to grow to 160 • 80 Opteron processor-based Sun Microsystems SunFire servers • Each with 8 Gigabytes of random access memory (RAM) and 73 Gigabytes of Ultra SCSI disk storage • Two interconnect switches: • InfiniCon Systems InfiniBand switch • Force10 E600 Gigabit Ethernet switch • Ethernet Requirements • Non-blocking throughput • Scalability & high port density • High reliability

  14. Penn State University HPC Force10 E600 Features: • Fully distributed architecture that separates switching, routing and management functions • Protected memory and processing power for each function ensure predictable performance, even in the face of denial-of-service attacks • Built-in redundancy of all key components, including switch fabrics, power supplies and route processor modules • Hitless failover technology ensures that the E600 continues to forward traffic in the event of a failure, with zero packet loss.

  15. UltraScience Net UltraScience Net On-Ramp • UltraScience Net • Building an extended-regional lambda-switching testbed • Dedicated fiber/optical infrastructure from Atlanta to Chicago and across Tennessee • 2 x 10 Gig, extended from Chicago to Seattle to Sunnyvale • Connect to NLR in Atlanta and Chicago • Provide an evolving matrix of switching capabilities • Separately fund research projects (e.g., high-performance protocols, control, visualization) that will exercise the network and directly support applications at the host institutions • Force10 E300s bridge on & off UltraScience Net • 10 GE WAN PHY to 10 GE LAN PHY
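Bridging 10 GE WAN PHY to 10 GE LAN PHY is not a straight pass-through, because the two PHYs carry Ethernet at different effective data rates. A minimal sketch of the mismatch, using the nominal rates from IEEE 802.3ae (how the E300 handles this internally is not described here):

```python
# Rate mismatch a WAN-PHY-to-LAN-PHY bridge must absorb.
# Rates are the nominal IEEE 802.3ae figures; this is an
# illustrative note, not a description of E300 internals.

LAN_PHY_MAC_RATE = 10.0     # Gb/s data rate of 10GBASE-R (LAN PHY)
WAN_PHY_MAC_RATE = 9.2942   # Gb/s effective data rate of 10GBASE-W
                            # (SONET OC-192 framing eats the rest)

mismatch = (LAN_PHY_MAC_RATE - WAN_PHY_MAC_RATE) / LAN_PHY_MAC_RATE
print(f"LAN side can outrun the WAN side by about {mismatch:.1%}")
```

The bridge therefore needs buffering and flow control (or pacing) on the LAN side so sustained traffic does not overrun the WAN-PHY link.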

  16. UltraScience Net Phase-4 [diagram: four Force10 E300s]

  17. TeraGrid

  18. TeraGrid • NCSA – National Center for Supercomputing Applications • SDSC – San Diego Supercomputing Center • And several more!

  19. Force10 Networks, Inc. Info • Corporate Info • E-Series Switch/Routers • TeraScale & EtherScale (original) • S-Series – new S50 Data Center Switch • 48-Port GE plus 2-Port 10 GE • New 90-port GigE Card WARNING: Marketing Slides! (Only 3)

  20. Force10 Networks, Inc. Leaders in 10 GbE Switching & Routing • Founded in 1999, Privately Held • First to ship line-rate 10 GbE switching & routing • Pioneered new switch/router architecture providing best-in-class resiliency and density, simplifying network topologies • Customer base spans academic/research, data center, enterprise and service provider • Fastest-growing 10 GbE vendor, up 92% in 1H04 • April 2005: TeraScale E300 switch/router named winner of the Networking Infrastructure category in eWEEK's Fifth Annual Excellence Awards program.

  21. Force10 Firsts… [timeline graphic, Jan 2002 – April 2005] • First line-rate 10 GbE system shipped (E1200) • First line-rate 10 GbE mid-size system shipped (E600) • First line-rate 10 GbE compact-size system shipped (E300) • First public zero-packet-loss hitless failover demo (Nov 2003) • First line-rate 336 GbE ports demo • First line-rate 672 GbE / 56 10 GbE ports • First >1200 GbE ports per chassis • First 48 GbE x 10 GbE purpose-built data center switch

  22. TeraScale E-Series: A New Generation of 10 Gigabit Ethernet • Industry’s best density per chassis • 672 LINE-RATE GbE ports • 56 LINE-RATE 10 GbE ports • Driving down price/port! • First LINE-RATE 48-port GbE line card & LINE-RATE 4-port 10 GbE line card • Industry’s best performance • First true Terabit/second switch/router processing 1 billion packets per second • Industry’s most scalable resiliency • ZERO packet loss hitless failover at Terabit data rates • Industry’s most scalable security • No performance degradation with 1 million ACLs • Investment protection for 100 GbE (E1200, E600, E300)
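The "1 billion packets per second" figure can be sanity-checked with back-of-envelope arithmetic, assuming worst-case minimum-size 64-byte Ethernet frames plus the 20 bytes of preamble and inter-frame gap each frame occupies on the wire (an illustrative calculation, not vendor data):

```python
# Back-of-envelope check of the 1 Gpps claim for 672 line-rate GbE ports,
# assuming minimum-size Ethernet frames.

FRAME_BYTES = 64           # minimum Ethernet frame
OVERHEAD_BYTES = 20        # preamble/SFD (8) + inter-frame gap (12)
WIRE_BYTES = FRAME_BYTES + OVERHEAD_BYTES   # 84 bytes per frame on the wire

GBE_RATE_BPS = 1_000_000_000                # 1 Gb/s per GbE port
pps_per_port = GBE_RATE_BPS / (WIRE_BYTES * 8)   # ~1.488 Mpps per port

ports = 672
total_pps = ports * pps_per_port
print(f"{pps_per_port:,.0f} pps/port x {ports} ports = {total_pps/1e9:.2f} Gpps")
```

672 ports at the standard 1.488 Mpps worst-case rate is almost exactly 1 billion packets per second, which is consistent with the slide's claim.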

  23. TeraScale E-Series: Chassis-based 10 GbE Switch/Router Family

  24. TeraScale E-Series: Chassis-based 10 GbE Switch/Router Family – Highest Density GigE and 10 GigE

  25. E-Series – Layer 2 Switching & Layer 3 Routing, at Line Rate • Line Rate Performance • Line-rate, non-blocking forwarding performance on all ports, even with all features enabled simultaneously • Extended Access Control Lists (ACLs) for packet filtering and policy routing • Multi-field packet lookup and classification for QoS • Packet metering and marking for rate limiting and policing • Congestion control using WRED and WFQ • Full Layer 2 Switching and Layer 3 Routing • BGP, OSPF, IS-IS and RIP routing protocols • Prefix-based distributed forwarding table on every line card • Forwarding table supports up to 256K routes • VLAN redundancy, Rapid Spanning Tree, VLAN Stacking • Multicast with IGMP, PIM-SM, PIM-BSR, MBGP & MSDP
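WRED, listed above for congestion control, avoids tail-drop synchronization by dropping packets probabilistically as the average queue depth grows. A minimal sketch of the classic WRED drop decision, with illustrative thresholds rather than Force10 defaults:

```python
import random

def wred_drop(avg_queue, min_th=20, max_th=40, max_p=0.1):
    """Classic WRED drop decision (sketch): never drop below min_th,
    ramp the drop probability linearly up to max_p between min_th and
    max_th, and always drop at or above max_th."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    drop_p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < drop_p

# Low queue depth: packets always pass; past max_th: always dropped.
print(wred_drop(10), wred_drop(45))
```

In hardware the same decision runs per queue (and often per drop-precedence color) at line rate; the averaging of the queue depth is an EWMA rather than the instantaneous value used here.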

  26. Technical Details: 90-port 10/100/1000Base-T Line Card • 15 mini RJ-21 connectors, each providing 6 ports • Occupies one chassis slot (E600/E1200) • 1.8:1 lookup-oversubscribed GbE ports • Functions as a line-rate card if every alternate connector is used
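The 1.8:1 figure and the "every alternate connector" rule are consistent with simple arithmetic, assuming the card's lookup path handles about 50 Gb/s (an inferred figure, not vendor documentation):

```python
# Rough arithmetic behind the 1.8:1 oversubscription figure
# (assumed interpretation, not vendor documentation).

ports_per_connector = 6
connectors = 15
total_ports = connectors * ports_per_connector        # 90 GbE ports

oversubscription = 1.8
lookup_capacity_gbps = total_ports / oversubscription  # 50 Gb/s of lookup

# "Every alternate connector" populates 8 of the 15 connectors:
active_ports = 8 * ports_per_connector                # 48 ports
line_rate = active_ports <= lookup_capacity_gbps      # 48 <= 50: line rate
print(total_ports, lookup_capacity_gbps, active_ports, line_rate)
```

With only alternate connectors cabled, offered load (48 Gb/s) stays under the assumed lookup capacity, so every active port runs at line rate.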

  27. Force10 – An Integral Member of the Internet2 Community • Internet2 Corporate Partner • HOPI Project Partner • Key to many advanced networking, research and supercomputing projects • Leading in GigE, 10 Gigabit Ethernet and Beyond! www.force10networks.com

  28. Thank You dmontano@force10networks.com www.force10networks.com
