
OptIPuter Physical Testbed at UCSD, Extensions Beyond the Campus Border




Presentation Transcript


  1. OptIPuter Physical Testbed at UCSD, Extensions Beyond the Campus Border. Philip Papadopoulos and Cast of Real Workers: Greg Hidley, Aaron Chin, Sean O'Connell, Max Okumoto, Praveen Kumar, Mason Katz, David Hutches

  2. Physical Campus Connections
  • O(300) nodes (Storage, Compute, Visualization)
  • O(30) switches
  • How nodes are connected to the network:
    • Optical core
    • Site switches (2 x 10GigE) – 48-port GigE (see the oversubscription sketch below)
    • Rack switches: 24- or 48-port GigE copper
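As a rough worked example (a sketch added here, not part of the original slides), the site-switch numbers above imply about 2.4:1 oversubscription toward the optical core; the only input not taken from the slide is the assumption that every edge port runs at line rate.

    # Rough oversubscription estimate for the site switches described above.
    # Port counts (48 x GigE edge, 2 x 10GigE uplink) come from the slide; the
    # assumption that every port is driven at line rate is for illustration.

    def oversubscription(edge_ports, edge_gbps, uplink_ports, uplink_gbps):
        """Ratio of downstream edge capacity to upstream uplink capacity."""
        return (edge_ports * edge_gbps) / float(uplink_ports * uplink_gbps)

    # Site switch: 48 GigE ports toward racks/nodes, 2 x 10GigE toward the core.
    print(oversubscription(48, 1, 2, 10))   # -> 2.4, i.e. 2.4:1 when fully loaded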

  3. UCSD Packet Test Bed – Year 2/3

  4. Quartzite Extensions
  • Funded as an NSF Major Research Instrumentation award – a companion to OptIPuter
  • Observation: a packet-only switch structure is not rich enough for OptIPuter research
  • 300+ OptIPuter nodes
  • Viz (3 tiled displays), storage (48 nodes, 300 spindles), several hundred compute nodes (x86, Opteron; Itanium never materialized)
  • The current UCSD OptIPuter network is channel-bonded GigE, with some 10 GigE deployment starting (the network is not "fat" enough for OptIPuter research)
  • Very odd behavior with 10-year-old EtherChannel technology
  • Quartzite more closely matches the network capability to the nodes
  • At the end of three years: 0.5 Terabits into the Quartzite switching core
  • Quartzite's fundamental capability: building hybrid networks – packet-switched, circuit-switched, wavelength-switched, hybrid combinations, and reconfigurable

  5. Building Blocks of Quartzite
  • CWDM is inexpensive ($2500 per connected endpoint)
  • Colored GBICs: 8 GigE wavelengths centered at 1550 nm (+/- j*20 nm, j = 1, 2, 3, 4; worked out in the sketch below)
  • Plus LR 10 GigE at 1320 nm; colored 10 GigE XENPAK or SFP modules may be available by the end of the proposal
  • Passive optical multiplexers (coarse): 9 channels – the 1550 nm-centered GBIC wavelengths plus a 1320 nm passband for 10 GigE
  • Comment: since the proposal (January 2004), uncooled DWDM lasers have become more readily available
  • OOO optical switch (commercial, Glimmerglass), used as an optical patch panel
  • Custom-built wavelength-selective switch from Lucent
  • Standard packet switch/router: we want to extend the Chiaro, but cost limits this expansion
  • 32-port 10 GigE-capable 6509 switch, ordered with 8 10 GigE ports
  • Existing single-mode fiber plant (already tested and terminated): 4 pairs per site on the UCSD campus
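A small worked example (added here, not from the deck) simply evaluates the wavelength formula above to make the CWDM channel plan concrete:

    # Eight GigE wavelengths spaced +/- j*20 nm around 1550 nm (j = 1..4),
    # plus the 1320 nm band carrying LR 10 GigE through the same passive
    # multiplexers.

    center_nm = 1550
    gige_channels = sorted(center_nm + sign * j * 20
                           for j in range(1, 5)
                           for sign in (-1, 1))
    print(gige_channels)        # [1470, 1490, 1510, 1530, 1570, 1590, 1610, 1630]
    print(len(gige_channels))   # 8 colored-GBIC wavelengths
    ten_gige_nm = 1320          # passband reserved for the 10 GigE channel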

  6. Quartzite Year 1 Deployment Plans • Deploy OOO Switch • Optical “patch panel” • Add some L2 10 GigE connections • Initial Experience with CWDM

  7. Year 2 Deployment Plans • Add Several CWDM Channels • Deploy Wavelength Switch • Begin Significant 10 GigE Buildout • Packet Switch • 10 GigE into some endpoint nodes

  8. Final Planned Configuration • >50 endpoints connected at 10 GigE • >= 32 packet-switched • >= 32 switched wavelengths • >= 300 connected endpoints (a capacity sanity check follows)
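A one-line sanity check (a sketch, not part of the original slides) ties this configuration back to the 0.5 Terabit figure on the Quartzite Extensions slide:

    # ">50 endpoints at 10 GigE" is where slide 4's "0.5 Terabits into the
    # Quartzite switching core" figure comes from.

    endpoints_10gige = 50                # lower bound from this slide
    gbps_per_endpoint = 10
    print(endpoints_10gige * gbps_per_endpoint / 1000.0)   # -> 0.5 Tb/s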

  9. Glimmerglass Switch – Now Deployed

  10. A Whole Host of Real Issues
  • Network:
    • Where are nodes connected?
    • How are switches interconnected? Channel bond? 10 GigE?
    • Are there wavelengths to be allocated? How?
    • Can I build a VLAN that traverses specific physical links?
    • Can we have a library of different network configs for the testbed?
  • Node software:
    • Can I have root on nodes? How do I specify a particular OS image for a node?
    • What nodes can I have? For how long? How do I determine whether or not I am in competition for resources on the testbed?
  • Grid software:
    • What Grid software is available as the base configuration? What should be?
  • Performance:
    • Why does my long-distance channel give me asymmetric performance?

  11. Network Inventory Management
  • The schema is designed to support management of both physical and virtual topologies. The physical topology is the network with its connections as built; a virtual topology shows the actual network flow paths.
  • Physical topology schema components:
    • Switch model (manufacturer, model name)
    • Switch-unique information (MAC, IP, hostname, etc.)
    • Host model (processor, memory, etc.)
    • Host-unique information (MAC, IP, hostname, etc.)
    • Connections (physical connections among switch or host entities)
    • Trunk-based connections (ports participating in a physical trunk)
  • Connections are the basic primitives that allow us to build, visualize, and navigate the topology. Any information about the network can be obtained by joining the connections table with the other tables in the schema (see the query sketch below).
  [Figure: physical topology graph]
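A minimal sketch of the kind of Python wrapper the slide describes: join the connections table against the endpoint tables and emit a GraphViz "dot" file of the physical topology. The slide only states that the database is MySQL with Python wrappers generating GraphViz output, so every table name, column, and credential below is an illustrative assumption (for brevity a single hypothetical "endpoints" table stands in for the separate switch and host tables).

    # Illustrative only: schema names (connections, endpoints, src_id, dst_id)
    # and credentials are assumptions, not the actual Quartzite schema.
    import MySQLdb  # MySQL driver; the slide says MySQL with Python wrappers

    def dump_physical_topology(outfile="physical_topology.dot"):
        db = MySQLdb.connect(host="localhost", user="optiputer",
                             passwd="secret", db="inventory")
        cur = db.cursor()
        # Each row is one physical link between two endpoints (switch or host).
        cur.execute("""
            SELECT a.hostname, b.hostname
            FROM connections c
            JOIN endpoints a ON c.src_id = a.id
            JOIN endpoints b ON c.dst_id = b.id
        """)
        with open(outfile, "w") as f:
            f.write("graph physical_topology {\n")
            for src, dst in cur.fetchall():
                f.write('  "%s" -- "%s";\n' % (src, dst))
            f.write("}\n")

    if __name__ == "__main__":
        dump_physical_topology()
        # Render with GraphViz: dot -Tpng physical_topology.dot -o topology.png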

  12. OptIPuter Network Inventory Management
  • The logical topology adds a VLAN table to the physical topology tables.
  • A VLAN is composed of trunks.
  • Each trunk can be a single or multiple port-to-port connection between the same pair of switches.
  • The schema supports retaining the VLAN id when modifying trunks, and vice versa.
  • The schema contains the network information needed to program the network and construct virtual topologies. The VLAN table, in conjunction with the Connections and Trunks tables, can be used to create the needed setup. Users can specify inputs as IP addresses or hostnames, which are mapped to the tables above through the host and switch tables.
  • The database is implemented in MySQL, with Python wrappers to insert data and generate output using GraphViz (see the sketch below).
  • Work is in progress to automate data collection.
  [Figure: logical topology graph (single VLAN)]
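A companion sketch for the logical topology: pull the links belonging to a single VLAN by walking VLAN -> trunks -> connections. As above, the table and column names are illustrative assumptions rather than the real schema.

    # Illustrative only: vlans/trunks/connections/endpoints and their columns
    # are assumed names, not the actual Quartzite schema.
    import MySQLdb

    def vlan_links(vlan_tag):
        db = MySQLdb.connect(host="localhost", user="optiputer",
                             passwd="secret", db="inventory")
        cur = db.cursor()
        cur.execute("""
            SELECT a.hostname, b.hostname
            FROM vlans v
            JOIN trunks t      ON t.vlan_id = v.id
            JOIN connections c ON c.trunk_id = t.id
            JOIN endpoints a   ON c.src_id = a.id
            JOIN endpoints b   ON c.dst_id = b.id
            WHERE v.vlan_tag = %s
        """, (vlan_tag,))
        return cur.fetchall()

    # Example: list the links carrying VLAN 100 in this hypothetical schema.
    for src, dst in vlan_links(100):
        print(src, "--", dst)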

  13. OS and Software Integrations • Rocks configurations on endpoints allow us to build libraries of OS configs. • Rolls allow programmatic extensions to a complete cluster installation. • E.g., Grid, Scheduler, Kernel Rolls allow overwrite/extension • Visualization Roll captures tiled-display wall software

  14. Extensions to Wider Area

  15. Southern California CalREN-XD Build Out

  16. Expanding the OptIPuter LambdaGrid [map of 1 GE and 10 GE lambdas linking StarLight Chicago, UIC EVL, NetherLight/U Amsterdam, PNWGP Seattle, NU, NASA Ames, NASA Goddard, NASA JPL, ISI, Level 3, SDSU, UCI, CICESE (via CUDI), the CENIC Los Angeles and San Diego GigaPOPs, CalREN-XD, NLR, the CENIC/Abilene shared network, and UCSD]

  17. Southern California CalREN-HPR
