
OptIPuter Infostructure: East of I-5*






Presentation Transcript


  1. OptIPuter Infostructure: East of I-5* Tom DeFanti, Maxine Brown, Jason Leigh, Oliver Yu, Tom Moher, Bob Grossman, Joe Mambretti, Valerie Taylor, Cees de Laat

  2. “East of I-5” OptIPuter
  • The “East of I-5” OptIPuter is focused on large-data applications using experimental technology
  • Experimental means that:
    • It is obtainable and affordable
    • It works 99% of the time
    • It is programmable
    • Brute force is acceptable
  • Our goal: to make photonics controllable by Grid middleware as soon as possible

  3. What is a Lambda?
  • A lambda, in networking, is a fully dedicated wavelength of light in an optical network, typically used today for 1-10 Gbps
  • We are now working with 1Gb dedicated Layer 2 circuits that act like lambdas
  • We need enough to schedule and manipulate: up to 40 1Gb “sub” lambdas will be available to the OptIPuter locally, regionally, and internationally
  • We expect 10Gb lambdas to be available to the OptIPuter in a few years; first locally, then regionally, then (inter)nationally

  4. Gross Optical Burst Switching (GOBS)
  • Move a terabyte or petabyte on a schedule
  • Roam a 375600 x 375600 pixel remote database
  • Applications will be able to request dedicated lambdas using the routed infrastructure (see the sketch below)
  • Bypass the routers in between
  [Diagram: PC clusters at each end connected directly through a photonic switch (Glimmerglass, Calient), bypassing the intermediate routers]
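  The slides do not spell out the middleware interface for requesting a dedicated lambda, so the following is only a minimal sketch of what such a scheduled request might look like; the LambdaRequest type and reserve_lambda() function are hypothetical names, not part of any OptIPuter API.

```python
# Hypothetical sketch only: the OptIPuter middleware interface is not specified
# in these slides, so LambdaRequest and reserve_lambda() are illustrative names.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LambdaRequest:
    src: str             # source cluster endpoint (hypothetical hostname)
    dst: str             # destination cluster endpoint (hypothetical hostname)
    bandwidth_gbps: int  # 1 Gb "sub" lambdas today, 10 Gb expected later
    start: datetime      # scheduled start of the dedicated circuit
    duration: timedelta  # how long the bulk transfer needs the lambda

def reserve_lambda(req: LambdaRequest) -> str:
    """Pretend to ask Grid middleware for a dedicated lightpath.

    In a real system this call would configure the photonic (MEMS) switches
    end to end so the transfer bypasses routers; here it just returns a
    made-up reservation handle.
    """
    print(f"Requesting {req.bandwidth_gbps} Gb lambda "
          f"{req.src} -> {req.dst} at {req.start} for {req.duration}")
    return "reservation-0001"

# Example: schedule a terabyte-scale transfer on a dedicated 1 Gb circuit.
handle = reserve_lambda(LambdaRequest(
    src="cluster-a.example.edu", dst="cluster-b.example.edu",
    bandwidth_gbps=1,
    start=datetime(2003, 6, 1, 2, 0),
    duration=timedelta(hours=3)))
```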

  5. Cluster Visualization
  • 5x3 grid of 1280x1024-pixel LCD panels driven by a 16-PC cluster
  • Resolution = 6400x3072 pixels, or ~3000x1500 pixels in autostereo
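  The display-wall resolution follows directly from the panel grid; a quick check of the arithmetic (the halving for autostereo is an assumption about the technique, which the slide does not specify):

```python
# Sanity check of the tiled-display arithmetic from the slide above.
cols, rows = 5, 3                 # 5x3 grid of LCD panels
panel_w, panel_h = 1280, 1024     # pixels per panel
wall_w, wall_h = cols * panel_w, rows * panel_h
print(wall_w, wall_h)             # 6400 3072, as stated on the slide
print(wall_w * wall_h / 1e6)      # ~19.7 megapixels total
# In autostereo mode the usable resolution drops to roughly half in each
# dimension (the slide's ~3000x1500); the exact figure depends on the
# autostereo technique, which the slide does not name.
print(wall_w // 2, wall_h // 2)   # 3200 1536, close to the quoted ~3000x1500
```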

  6. NTT 4Kx2K Compressed Video from Chicago to Los Angeles
  • Pre-compressed to 300 Mbps in Chicago using an experimental JPEG 2000 SHD codec
  • Received in LA at the USC Zemeckis Center by an NTT real-time decoder, and fed to NTT's prototype SHD frame buffer and 8-megapixel full-color D-ILA projector for display on a large screen
  • SHD = 4x HDTV or 16x DVD
  • http://www.ntt.co.jp/news/news02e/0211/021113.html
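  A rough sense of how aggressive the 300 Mbps JPEG 2000 compression is, assuming 3840x2048 frames (to match the 8-megapixel projector), 24 bits per pixel, and 30 frames per second; none of these parameters are stated on the slide:

```python
# Rough compression-ratio estimate for the 300 Mbps SHD stream.
# Assumptions (not stated on the slide): 3840x2048 frames, 24 bits per pixel,
# 30 frames per second.
width, height = 3840, 2048
bits_per_pixel = 24
fps = 30
raw_bps = width * height * bits_per_pixel * fps
print(raw_bps / 1e9)    # ~5.7 Gbps uncompressed
print(raw_bps / 300e6)  # ~19:1 compression to fit in 300 Mbps
```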

  7. Lambdas for high bandwidth applications
  • Bypass the production network
  • Middleware request for optical pipes
  • Rationale: lower the cost of transport per packet
  [Diagram: application/middleware/transport stacks at UvA (Amsterdam) and UBC (Vancouver), with GbE links through SURFnet5 and CA*net4 via Chicago; lambda switches provide a path that bypasses the intermediate routers]

  8. Scale 2-20-200

  9. EVL
  [Image slide]

  10. OMNInet
  • A four-site network in Chicago -- a 10GE service trial
  • A test bed for all-optical switching and advanced middleware
  • Partners: SBC, Nortel, iCAIR at Northwestern, EVL, CANARIE, ANL
  • LambdaGrid scale: 2 ms
  [Diagram: application clusters at UIC, Northwestern U, and the StarLight carrier hotel, each with a Passport 8600 and an optical switching platform, interconnected by 2x10GE and 8x1GE links over OPTera Metro 5200 equipment, with a loopback]

  11. Illinois’ I-WIRE
  [Map: I-WIRE dark-fiber topology (2–18 fiber pairs per link) connecting StarLight (NU-Chicago), Argonne, Qwest at 455 N. Cityfront, UC Gleacher Center at 450 N. Cityfront, UIC, UIUC/NCSA, McLeodUSA at 151/155 N. Michigan, Doral Plaza, Level(3) at 111 N. Canal, the Illinois Century Network, the James R. Thompson Center/City Hall/State of IL Building, UChicago, and IIT]

  12. I-5/PLR

  13. StarLight supports the OptIPuter as
  • A production network: 1GigE and 10GigE exchange
  • An experimental network: lambda exchange
  • A research network: 1GigE and 10GigE MEMS-switched exchange
  • Host to DTFnet, the TeraGrid’s 4x10Gb T640-based experimental network, perhaps for future collaborations
  • A co-location space with 66 racks for networking, computing, and data-management equipment
  • An OIX with fiber and/or circuits from SBC/Ameritech, Qwest, AT&T, Global Crossing, Looking Glass Networks, Level 3, RCN, Deutsche Telekom/T-Systems, and I-WIRE
  • A facility for links from NetherLight, CERN/DataTAG, and CA*net4, and proposed links from UK-Light and APAN, forming Trans-Light

  14. Clusters East of I-5
  • Each site (StarLight, UIC, NetherLight) has:
    • Several clusters with dual processors and dual GigE cards
    • Electronic switching and routing
    • Optical switching
  • Specialized clusters:
    • Computing
    • Data mining and serving
    • Visualization
  • Upgrade to Itanium clusters in progress
  • Upgrade to 10GigE NICs next year

  15. Lambdas East of I-5, 2003
  • Illinois
    • 16 GigEs to UIC
    • 8 GigEs to Northwestern/Evanston
    • GigEs to ANL, NCSA
  • Canada
    • 8 GigEs Chicago to NYC
    • 8 GigEs Chicago to Seattle
  • Europe
    • 16 GigEs Chicago to Amsterdam
    • 4 GigEs Chicago to CERN
    • 2 GigEs CERN to Amsterdam

  16. Optical MEMS Switching

  17. Why Optical Switching?
  • No need to look at every packet when transferring a terabyte of information
  • Roughly 1% the cost of routing and 10% the cost of electronic switching
  • 64x64 10Gb:
    • $100,000 O-O-O switched
    • $1,000,000 O-E-O switched
    • $10,000,000 O-E-O routed
  • Spend the savings on links, computing, and collaboration systems instead
  • Replaces patch panels; allows rapid reconfiguration of 1 and 10Gb experiments
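  Dividing the quoted prices by the port count gives a per-port view of the same 1%/10% claim; treating “64x64” as 64 bidirectional 10Gb ports is an assumption:

```python
# Per-port cost comparison implied by the slide's 64x64 10Gb price points.
# Assumption: "64x64" is treated here as 64 bidirectional ports.
ports = 64
prices = {"O-O-O switched": 100_000,
          "O-E-O switched": 1_000_000,
          "O-E-O routed": 10_000_000}
for kind, total in prices.items():
    print(f"{kind}: ~${total / ports:,.0f} per 10Gb port")
# ~$1,563 vs ~$15,625 vs ~$156,250 per port: the 1%/10% cost claim above.
```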

  18. UIC Procurement of 3D MEMS Switches
  • Sent out a bid request for two 64x64 3D MEMS switches
  • GlimmerGlass Networks and Calient switches were tested with a small cluster
    • Using both MMF (with adapters) and SMF NICs
    • Computer controlled
    • No problems encountered
  • Calient won the bid for two switches; UIC is upgrading one switch to 128x128
  • GlimmerGlass Networks will provide EVL a loaner 64x64 switch

  19. Optical Switches at StarLight and NetherLight
  • A “groomer” is a box that accepts multiple circuits of varying types (e.g., 1GigE, 10GigE) and aggregates and/or disseminates them over the 10Gbps transoceanic link
  • As the amount of transoceanic connectivity increases, we aim to “bandwidth match” the amount of data being sent and/or received by clusters across continents (see the sketch below)
  • GigE = Gigabit Ethernet (Gbps connection type)
  [Diagram: at StarLight, a 16-processor cluster and a 128x128 MEMS optical switch feed 8-16 GigE circuits into a “groomer”; at NetherLight, an 8-processor and an N-processor cluster and a 64x64 MEMS optical switch feed a matching “groomer”; the groomers connect over an OC-192 (10Gbps) transoceanic link, with separate data and control planes through switch/routers]
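  A back-of-envelope check of the “bandwidth matching” idea, assuming the usable OC-192 payload is about 9.5 Gbps after SONET overhead (the slide only says “10Gbps”):

```python
# Bandwidth-matching estimate for the transoceanic "groomer" link.
# Assumption: usable OC-192 payload is roughly 9.5 Gbps (line rate 9.953 Gbps
# minus SONET overhead); the slide only quotes "OC-192 (10Gbps)".
oc192_payload_gbps = 9.5
gige_gbps = 1.0
print(int(oc192_payload_gbps // gige_gbps))  # ~9 full GigE circuits fit
# So the 8 GigE circuits on the NetherLight side can be groomed onto a single
# OC-192, while the 16 GigE circuits at StarLight would need a second 10 Gbps
# lambda (or rate limiting) to stay "bandwidth matched".
```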

  20. Hard Infostructure Problems
  • The Internet is not designed for single large-scale users -- standard TCP does not work well over long fat networks (high bandwidth-delay-product paths), as sketched below
  • Circuits are not scalable, but neither are router$
  • All intelligence has to be on the edge
  • Tuning compute, data, visualization, and networking using clusters to get an order-of-magnitude improvement
  • Security at 10Gb line speed
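  To see why standard TCP struggles here, consider the bandwidth-delay product a single flow must keep in flight; the 100 ms round-trip time and 1 Gbps/10 Gbps rates below are illustrative values, not figures from the slides:

```python
# Why standard TCP struggles on "long fat" paths: the congestion window must
# cover the full bandwidth-delay product to keep the pipe full.
# Assumed example values: 1 Gbps path, 100 ms RTT (roughly Chicago-Amsterdam).
bandwidth_bps = 1e9
rtt_s = 0.100
bdp_bytes = bandwidth_bps * rtt_s / 8
print(bdp_bytes / 1e6)        # ~12.5 MB window needed at 1 Gbps
print(10 * bdp_bytes / 1e6)   # ~125 MB at 10 Gbps
# A default 64 KB TCP window fills only a tiny fraction of this pipe, and a
# single loss at such window sizes takes many RTTs to recover from.
print(64e3 / bdp_bytes)       # fraction of the 1 Gbps pipe a 64 KB window uses
```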

  21. Thanks to…
  • StarLight planning, research, collaborations, and outreach efforts are made possible, in major part, by funding from:
    • National Science Foundation (NSF) awards ANI-9980480, ANI-9730202, EIA-9802090, EIA-9871058, ANI-0225642, and EIA-0115809
    • NSF Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement ACI-9619019 to NCSA
    • State of Illinois I-WIRE Program, and major UIC cost sharing
  • Northwestern University for providing space, engineering, and management
  • NSF/CISE/ANIR and DoE/Argonne National Laboratory for StarLight and I-WIRE network engineering and planning leadership
  • NSF/CISE/ACIR and NCSA/SDSC for DTF/TeraGrid/ETF opportunities
  • The OMNInet Initiative
  • UCAID/Abilene for Internet2 and ITN transit; IU for the GlobalNOC
  • Bill St. Arnaud of CANARIE, Kees Neggers of SURFnet, Olivier Martin of CERN, Michael McRobbie of IU, and Harvey Newman of CalTech

  22. OptIPuter: East of I-5
