
Control Plane Issues for NRENs


Presentation Transcript


  1. Control Plane Issues for NRENs Gigi Karmous-Edwards gigi@mcnc.org May 22, 2006 EARNEST Foresight Workshop

  2. Outline • E-Science • Research Challenges • Examples of Infrastructure for Network Research Experimentation • NLR • GENI • GLIF • International Collaboration Experiences w/ EnLIGHTened Computing • Conclusions

  3. Motivation

  4. E-science • E-science: global, large-scale scientific collaborations enabled through distributed computational and communication infrastructure • Combines scientific instruments and sensors, distributed data archives, computing resources and visualization to solve complex scientific problems • Applications in physics, molecular biology, environmental science, health, entertainment, etc. • E-Science definitions (Oxford e-Science Centre) • The Department of Trade and Industry defines e-Science as: "Science increasingly performed through distributed global collaborations enabled by the Internet, using very large data collections, terascale computing resources and high performance visualizations" • In practice, many areas of science that already use computing resources in their research will soon be able to draw on far more powerful computing resources across a new infrastructure commonly described as the 'grid'. Scientists will have access to very large data sets and perform real-time experiments on this data, ultimately allowing them to tackle 'big scientific questions' hitherto unexplorable.

  5. E-Science and Grid computing • Grid computing: main enabler of E-science • Grid is concerned with "coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations." (Ian Foster) • The migration of the E-science community towards Grid computing emerged from three converging trends: i) Advances in optical networking technologies. Widespread deployment of fiber infrastructure has led to low-cost, high-capacity optical connections. ii) Affordability of the required computational resources through sharing. Meeting the increasing demand for computational power and bandwidth from new e-science applications is financially difficult, and nearly impossible, unless resources are shared across research institutions. iii) Need for interdisciplinary research. The growing complexity of scientific problems is driving the need for increasing numbers of scientists from diverse disciplines and locations to work together in order to achieve breakthrough results.

  6. Developing a Global E-science Laboratory (GEL) • Korea's HVEM (high-voltage electron microscope) • One of a kind in the world • Provides global access to unique instruments for the purpose of advancing science for humanity • Web service interface • High-capacity optical network for output • Tasks that HVEM users can perform: • Requesting the general operations of the goniometer, TEM, and CCD camera • Viewing the real-time video from the CCD camera • Accessing or manipulating the 2-D or 3-D images • Generating the workflow specification and requesting the workflow to be executed • Searching the images or video files, papers, and experiments in the databases or storage (Hyuck Han, Hyungsoo Jung, Heon Y. Yeom, Hee S. Kweon, and Jysoo Lee, "HVEM Grid: Experiences in Constructing an Electron Microscopy Grid")

  7. E-Science and Grid Opportunity • Governments world-wide are promoting global E-science research programs • New Zealand: $43 million to establish the Advanced Network • United States: OptIPuter, Cyberinfrastructure, DDDAS, and DoE ($60M) • Canada: CA*net4 and i-Infrastructure • Australia: e-Research Initiative (started Oct 2004) • Europe: 7th Framework (€1B/year), GEANT2 (€93M), NetherLight, Geodise, UK e-Science, etc. • Private sector setting a strong pace • Google building their own L1/L2 network • Multi-billion-dollar investments from large businesses: IBM, HP, Intel, … • Smaller businesses: Grid-based enterprise applications, data center solutions, commercialization of the Globus Toolkit, …

  8. Network Requirements

  9. New Demands on Networks Emerging high-end applications • High-bandwidth connectivity between supercomputers (teraflops+) • Large file transfers over long distances: rethink TCP (e.g., FAST) • Applications/end-users/sensors/instruments requesting optical networking resources for host-to-host connections • Determinism (QoS), jitter and latency requirements (difficult with today's Internet, where transferring a terabyte of data may also be unfair to other applications) • Coordination of the network with computational and non-computational resources (CPU, databases, sensors, instruments)

  10. New Demands on Networks Emerging high-end applications (cont'd) • Exchange data with sensors, potentially via other physical resources (e.g., wireless) • The destination may not be known initially; rather, only a service is requested from the source and the destination is derived from the request information • Mechanisms for retrieving near-real-time information about network resources and network states • Mechanisms for both advance and fast on-the-fly reservation and set-up • Low-latency on-demand connection requests • Policy and security enforcement in open scientific environments
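The advance and on-the-fly reservation mechanisms above can be made concrete with a small sketch. Below is a minimal Python illustration of a combined request and a toy scheduler; the names (ReservationRequest, LightpathScheduler) and fields are hypothetical, not part of any standard interface.

# Minimal sketch of a combined advance / on-the-fly reservation request and
# a toy scheduler. All names and fields are hypothetical illustrations of
# the requirements above, not part of any standard interface.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ReservationRequest:
    source: str                               # requesting host, sensor, or instrument
    service: str                              # service requested; destination may be derived from it
    bandwidth_gbps: float
    start: Optional[datetime] = None          # None means on-the-fly: start as soon as possible
    duration: timedelta = timedelta(hours=1)
    max_latency_ms: Optional[float] = None    # determinism / QoS constraint

class LightpathScheduler:
    """Toy scheduler keeping a calendar of accepted reservations for one link."""

    def __init__(self):
        self.calendar = []                    # list of (start, end, request)

    def request(self, req: ReservationRequest) -> bool:
        start = req.start or datetime.utcnow()        # on-the-fly request starts now
        end = start + req.duration
        # Accept only if no already-accepted reservation overlaps in time.
        for booked_start, booked_end, _ in self.calendar:
            if start < booked_end and booked_start < end:
                return False
        self.calendar.append((start, end, req))
        return True

An on-the-fly request simply leaves start unset, so the same code path serves both the pre-scheduled and the immediate case.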

  11. Advances in Optical Technologies (How do we take advantage?) • 1000 channels per fiber; experimentation with 160G per channel • Dark fiber everywhere: a paradigm shift in ownership • Fiber is much cheaper (US headlines: "Google buys fiber") • All-optical switches are getting faster and smaller (ns switch reconfiguration) • Control plane protocols and SOA continue to mature, but should be revisited • Layer-1 optical switches are relatively cheaper than other technologies • Electronic dispersion compensation • Fiber, optical impairment control, and transceiver technology continue to advance while prices fall

  12. Research Challenges

  13. Research Challenges • Coordination of resources per request for both on-the-fly and advance reservations: network resources are an integral part of the application's request for shared resources • Advance reservation in distributed form: borrow from ATM research • Optimization of resource allocation • Interdomain across global Grid networks: network interdomain protocols, policies (management plane and control plane, Grid … Web services) • Dynamic and adaptive on-demand use of end-to-end networking resources (requires a near-real-time feedback loop): identification of functions and interactions between the control plane, management plane, and Grid middleware

  14. Research Challenges (cont'd) • Monitoring information of resources: i) identification of information, ii) abstraction of information, and iii) frequency of updates • Software algorithms to support multiple classes of software, including highly dynamic applications, workflow engines, and data-driven and event-driven applications • Rethinking the behavioral control of networks • Control/management planes interacting with middleware • Centralized vs. distributed functionality
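As a rough illustration of the monitoring challenge above (what information to publish, how abstract it should be, and how often to update it), here is a minimal Python sketch; the LinkState fields, the abstraction chosen, and the update interval are assumptions for illustration only.

# Sketch of the three monitoring questions above: which information to
# expose, how to abstract it, and how often to publish updates. The
# LinkState fields and the chosen abstraction are illustrative only.
import time
from dataclasses import dataclass

@dataclass
class LinkState:
    link_id: str
    free_wavelengths: int
    total_wavelengths: int

def abstract(state: LinkState) -> dict:
    # Abstraction step: expose utilisation, not per-wavelength detail.
    return {
        "link": state.link_id,
        "utilisation": 1 - state.free_wavelengths / state.total_wavelengths,
    }

def publish(states, update_interval_s=5, rounds=3):
    # Frequency-of-updates step: push abstracted state periodically
    # (here it is just printed; real code would feed Grid middleware).
    for _ in range(rounds):
        for s in states:
            print(abstract(s))
        time.sleep(update_interval_s)

publish([LinkState("RAL-CHI", free_wavelengths=28, total_wavelengths=32)])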

  15. [Architecture diagram] Applications, edge routers, and workflow engines submit requests to an Application Abstraction Layer (API) that translates each application request into policy; a Resource Manager / Co-Scheduler performs resource allocation (discovery, performance, policy), with resource monitoring for SLAs closing a feedback loop back into allocation.
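A minimal sketch of the layers in this diagram, with all function names and request fields hypothetical: the abstraction layer turns an application request into a resource policy, and monitoring feeds back into allocation on SLA misses.

# Sketch of the layers in this diagram, with all names and request fields
# hypothetical: the abstraction layer turns an application request into a
# resource policy, and monitoring feeds back into allocation on SLA misses.
def translate_request(app_request: dict) -> dict:
    """Application Abstraction Layer: application request -> resource policy."""
    return {
        "bandwidth_gbps": app_request["data_gb"] / app_request["deadline_s"] * 8,
        "cpu_cores": app_request.get("cores", 1),
        "sla_max_latency_ms": app_request.get("latency_ms"),
    }

def feedback_loop(policy, monitor, allocate, rounds=3):
    """Resource allocation with a monitoring feedback loop: re-allocate on SLA miss."""
    allocation = allocate(policy)
    for _ in range(rounds):
        report = monitor(allocation)               # resource monitoring for SLA
        sla = policy.get("sla_max_latency_ms")
        if sla is not None and report["latency_ms"] > sla:
            allocation = allocate(policy)          # renegotiate the allocation
    return allocation

policy = translate_request({"data_gb": 450, "deadline_s": 600,
                            "cores": 64, "latency_ms": 50})
feedback_loop(policy,
              monitor=lambda alloc: {"latency_ms": 40},
              allocate=lambda pol: {"path": "RAL-CHI", "cores": pol["cpu_cores"]})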

  16. Control Plane vs. Management Plane • Distributed control: control plane - the infrastructure and distributed intelligence that controls the establishment and maintenance of connections in the network, including protocols and mechanisms to disseminate this information, and algorithms for engineering an optimal path between end points. • Centralized control: management plane - management plane mechanisms rely on a client/server model, usually involving one or more management applications (structured hierarchically) communicating with each network element in its domain via a management protocol (e.g., SNMP, TL1, XML).
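To make the contrast concrete, here is a small Python sketch of the two models: a centralized manager polling every network element (as a management plane would over SNMP, TL1, or XML) versus elements disseminating state directly to their neighbors (as a control plane does). The classes are illustrative only and do not use a real management protocol.

# Sketch contrasting the two models on this slide. The management plane is
# modelled as a central manager polling each network element; the control
# plane as elements disseminating state to their neighbours directly.
class NetworkElement:
    def __init__(self, name):
        self.name = name
        self.neighbors = []                    # control plane peers it signals to
        self.state = {"ports_free": 8}

    def get(self, key):                        # management plane: answer a manager poll
        return self.state[key]

    def flood(self, update):                   # control plane: disseminate state to peers
        for peer in self.neighbors:
            peer.receive(self.name, update)

    def receive(self, origin, update):
        print(f"{self.name} learned {update} from {origin}")

class Manager:
    """Centralized management application polling every NE in its domain."""
    def __init__(self, elements):
        self.elements = elements

    def poll(self):
        return {ne.name: ne.get("ports_free") for ne in self.elements}

a, b = NetworkElement("NE-A"), NetworkElement("NE-B")
a.neighbors.append(b)
a.flood({"ports_free": 7})                     # distributed (horizontal) exchange
print(Manager([a, b]).poll())                  # centralized (vertical) polling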

  17. Centralized vs. Distributed Key areas for today's control plane: provisioning, recovery, and network behavioral control. [Diagram: migration from hierarchical network management, in which a centralized (vertical) manager talks to each NE, toward distributed (horizontal) protocols running between the NEs themselves.]

  18. Control Plane Functions • Routing (intra-domain and inter-domain): 1) automatic topology and resource discovery, 2) path computation (how do we use the infrastructure?) • Signaling: standard communications protocols between network elements for the establishment and maintenance of connections • Neighbor discovery: each NE shares the details of its connectivity with all its neighbors (a very powerful tool) • Local resource management: accounting of locally available resources
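Neighbor discovery is the easiest of these functions to sketch: each NE announces itself on every port and records which element and port it finds at the far end, building the local view that topology discovery then aggregates. The Python sketch below uses hypothetical class names and a toy two-node topology.

# Sketch of neighbour discovery: each NE sends hello messages on its ports
# and learns which NE/port is at the far end. Names and the "hello"
# mechanism are illustrative only.
class Port:
    def __init__(self, ne, port_id):
        self.ne, self.port_id, self.peer = ne, port_id, None

    def connect(self, other):        # a fibre between two ports
        self.peer, other.peer = other, self

class NE:
    def __init__(self, name):
        self.name = name
        self.ports = {}
        self.adjacencies = {}        # local view used for topology discovery

    def add_port(self, port_id):
        self.ports[port_id] = Port(self, port_id)
        return self.ports[port_id]

    def hello(self):
        """Send a hello on every port and record who answered."""
        for pid, port in self.ports.items():
            if port.peer is not None:
                self.adjacencies[pid] = (port.peer.ne.name, port.peer.port_id)

# Usage: two elements, one fibre, then run discovery on both ends.
a, b = NE("NE-A"), NE("NE-B")
a.add_port(1).connect(b.add_port(3))
a.hello(); b.hello()
print(a.adjacencies)                 # {1: ('NE-B', 3)}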

  19. Centralized vs. Distributed Behavioral Control of Networks • Re-thinking control functionality in terms of (centralized or distributed): • Information exchanged • Algorithms for path computation and recovery (CPU power vs. fast reaction time) • Discovery and advertising of resources • Scalability (frequency and amount of data exchange) • Timing (reaction time to events) • Interdomain interactions (is BGP the solution, or should it be centralized?) • Policy enabled (where does it reside vs. execute?)
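For the centralized case, path computation reduces to running a shortest-path algorithm over a full topology database held in one place, trading reaction time for available CPU. A minimal Python sketch follows; the topology and link costs are invented for illustration.

# Centralized path computation: one entity holding the full topology runs
# a shortest-path algorithm (more CPU available, but slower to react than
# distributed computation at the elements). Toy topology, not a real one.
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over a dict {node: {neighbor: cost}}; returns a node list."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in topology.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + c, nbr, path + [nbr]))
    return None

topology = {
    "RAL": {"CHI": 1, "WDC": 1},
    "WDC": {"RAL": 1, "CHI": 2},
    "CHI": {"RAL": 1, "WDC": 2, "SEA": 3},
    "SEA": {"CHI": 3},
}
print(shortest_path(topology, "RAL", "SEA"))   # ['RAL', 'CHI', 'SEA']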

  20. Examples of Infrastructure for Network Research Experimentation • NLR • GENI • GLIF

  21. NLR www.nlr.net

  22. courtesy of Tom West, CEO NLR

  23. National LambdaRail Mission To advance the research, clinical and educational goals of members and other institutions by establishing and maintaining a nationwide advanced network infrastructure. courtesy of Tom West, CEO NLR

  24. National LambdaRail Goals • Support experimental and production networks • Foster networking research • Promote next-generation applications • Facilitate interconnectivity among high-performance research and education networks courtesy of Tom West, CEO NLR

  25. GENI www.geni.net

  26. GENI (Global Environment for Network Innovations) Facility Design: Key Concepts - slicing, virtualization, programmability. [Diagram: a federated international facility spanning sensor networks, mobile wireless networks, and edge sites.] Slide from: Guru Parulkar, CISE, NSF
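Slicing can be illustrated with a few lines of Python: a shared substrate node hands out isolated shares ("slivers") of its capacity to experiment slices. The class and resource names are illustrative and only loosely follow GENI terminology; this is not the GENI control framework.

# Sketch of the slicing concept: a shared substrate node allocates isolated
# shares of its capacity to experiment slices. Names are illustrative only.
class SubstrateNode:
    def __init__(self, name, cpu_cores, bandwidth_gbps):
        self.name = name
        self.free = {"cpu": cpu_cores, "bw": bandwidth_gbps}
        self.slivers = {}            # slice name -> allocated share

    def allocate(self, slice_name, cpu, bw):
        """Give a slice its own isolated share, if capacity remains."""
        if cpu > self.free["cpu"] or bw > self.free["bw"]:
            return False
        self.free["cpu"] -= cpu
        self.free["bw"] -= bw
        self.slivers[slice_name] = {"cpu": cpu, "bw": bw}
        return True

node = SubstrateNode("edge-site-1", cpu_cores=16, bandwidth_gbps=10)
node.allocate("routing-experiment", cpu=4, bw=2)
node.allocate("sensor-overlay", cpu=2, bw=1)
print(node.free, node.slivers)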

  27. GENI Facility Goals Enable exploration of new network architectures, mechanisms, and distributed system capabilities. A shared facility that allows • Concurrent exploration of a broad range of experimental networks and distributed services • Interconnection among experimental networks & the commodity Internet • Users and applications to "opt-in" • Observation, measurement, and recording of outcomes • Development of a stronger scientific base Slide from: Guru Parulkar, CISE, NSF

  28. GLIF www.glif.is

  29. What is GLIF? GLIF is a collaboration of institutions, organizations, consortia and country National Research and Education Networks (NRENs) who voluntarily share optical networking resources and expertise for the advancement of scientific collaboration and discovery. GLIF's mission: to create and sustain a Global Facility that supports leading-edge capabilities based on new and emerging technologies and paradigms related to advanced optical networking to enable high-performance applications and services.

  30. Global Lambda Integrated Facility Visualization courtesy of Bob Patterson, NCSA. www.glif.is

  31. GLIF Automation? Web services? [Diagram: two clients (Client A, Client B), each with Grid middleware, spanning three NRENs, each with its own network management and NREN control plane.]

  32. GLIF Control Plane and Grid Middleware Integration WG Mission: To agree on the interfaces and protocols to automate and use the control planes of the contributed lambda resources, to help users on a global scale access optical resources on demand or pre-scheduled.

  33. GLIF Control Plane and Grid Middleware Integration WG • Work with the GLIF Tech group to establish what the GLIF resources (GOLEs) are • Defined network elements in RDF • Software that reads the RDF descriptions • Need to write to the Google Maps API to draw resources on a global basis • Provide an algorithm to compute paths from broker information • Establish Web services for connection services Thanks to Jeroen van der Ham
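The "software that reads the RDF descriptions" item above can be sketched with rdflib. The file name and the NDL-style namespace and class names below are assumptions for illustration; substitute the actual GOLE descriptions used by the working group.

# Sketch of reading an RDF network description with rdflib. The file name
# and the NDL-style namespace/class names are assumed for illustration.
from rdflib import Graph, Namespace, RDF

NDL = Namespace("http://www.science.uva.nl/research/sne/ndl#")  # assumed namespace

g = Graph()
g.parse("gole-description.rdf")          # hypothetical RDF/XML file

# List every device described in the file and the interfaces it exposes.
for device in g.subjects(RDF.type, NDL.Device):
    print("Device:", device)
    for iface in g.objects(device, NDL.hasInterface):
        print("  interface:", iface)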

  34. EnLIGHTened Computing: International Collaboration Experiences

  35. EnLIGHTened Computing connectivity diagram with partners [Map: EnLIGHTened wave (Cisco/NLR), Cisco/UltraLight wave, LONI wave, and CAVE wave linking Chicago, San Diego, L.A., Raleigh, Baton Rouge, and other NLR points of presence, with connections to Asia, Canada, and Europe.] • International partners: LUCIFER (EC), G-Lambda (Japan), GLIF • Members: MCNC GCNS, LSU CCT, NCSU, RENCI (subcontract) • Official partners: AT&T Research, SURA, NRL, Cisco Systems, Calient Networks, IBM • NSF project partners: OptIPuter, UltraLight, WAN-in-LAB, DRAGON, Cheetah

  36. EnLIGHTened/LUCIFER Sister Projects - Similar Goals Testbeds • 3 EU NRENs are partners + 3 national test-beds + 3 research networks in the US and Canada + 5 more expressed interest through LoIs • These community representatives are willing to monitor project progress, collaborate, and exploit its results

  37. Japan’s G-Lambda research collaboration Slide: Courtesy of Michiaki Hayashi KDDI R&D Laboratories Inc.

  38. Japan’s G-Lambda research collaboration Slide: Courtesy of Michiaki Hayashi KDDI R&D Laboratories Inc.

  39. Two interfaces emerging: one for network resources and one for compute resources. [Diagram: GL and EL requests flow through the GL GRS and HARC Acceptor respectively; GNS-WSI and HARC NIF/CIF interfaces, together with GNS-WSI and HARC-CIF wrappers and GL-CRM interfaces, connect them to the HARC NRM / GL NRM network resource managers and HARC CRM / GL CRM compute resource managers on the EL Grid and GL Grid.] Slide: Courtesy of Lina Battestilli
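The wrappers in this figure are essentially adapters between two reservation interfaces. The Python sketch below shows the idea: an EL-style lightpath request is translated into a GL-style call. Both interfaces here are hypothetical stand-ins, not the real GNS-WSI or HARC APIs.

# Sketch of the wrapper idea: the EnLIGHTened (EL) and G-Lambda (GL) sides
# expose different reservation interfaces, so a thin wrapper translates
# between them. Both interfaces are hypothetical stand-ins.
class GLNetworkResourceManager:
    """Stand-in for a GL-side NRM reached through a GNS-WSI-like interface."""
    def reserve(self, src, dst, bandwidth_mbps, start_epoch, end_epoch):
        print(f"GL reservation {src} -> {dst}, {bandwidth_mbps} Mb/s")
        return "gl-reservation-42"

class GnsWsiWrapper:
    """Accepts EL-style requests and forwards them in GL-style terms."""
    def __init__(self, gl_nrm):
        self.gl_nrm = gl_nrm

    def reserve_lightpath(self, request: dict) -> str:
        # The EL request uses Gb/s and a start plus duration in seconds;
        # translate the units and field names into what the GL side expects.
        return self.gl_nrm.reserve(
            src=request["from"],
            dst=request["to"],
            bandwidth_mbps=int(request["gbps"] * 1000),
            start_epoch=request["start"],
            end_epoch=request["start"] + request["duration_s"],
        )

wrapper = GnsWsiWrapper(GLNetworkResourceManager())
wrapper.reserve_lightpath({"from": "Raleigh", "to": "Tokyo",
                           "gbps": 1, "start": 1156000000, "duration_s": 3600})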

  40. Conclusions

  41. Further Reference: IEEE Communications Magazine, Feature Topic "Optical Control Plane for Grid Networks: Opportunities, Challenges and the Vision," Guest Editors: Admela Jukan and Gigi Karmous-Edwards; "An Optical Control Plane for the Grid Community," Vol. 44, No. 3, March 2006

  42. A Book written by the Community Coming soon

  43. Conclusions • Control plane research is vital to future-generation NRENs, with a strong focus on algorithms to meet the needs of driving applications • Dynamic reconfigurability of L1/L2 is essential to bring down cost and meet application requirements • Paradigm shifts: i) ownership and control of network infrastructure, ii) network resources treated as an integral Grid resource; both affect interdomain policies • The research networks are taking these bold steps on GLIF and testbed infrastructures; lessons learned should be applied to production quickly • International collaboration is a key ingredient for the future of scientific discovery, and the optical network plays the most critical role in achieving it!

  44. Acknowledgments The EnLIGHTened Team: Yufeng Xin, Steve Thorpe, Bonnie Hurst, Lina Battestilli, Mark Johnson, John Moore, Ed Seidel, Gabriele Allen, Seung Jong (Jay) Park, Jon Maclaren, Andrei Hutanu, Lonnie Leger, Lavanya Ramakrishnan, Joel Dunn, Savera Tanwir, Harry Perros, Javad Boroumand, Russ Gyurek, Wane Clark, Kevin McGrattan, Rick Schlichting, John Strand, Matti Hiltunen, Gary Crane, Hank Dardy, Olivier Jerphagnon, Ron Mackey, John Bowers, Carla Hunt, Andrew Mabe, Gigi Karmous-Edwards

  45. Thank You! Gigi Karmous-Edwards gigi@mcnc.org May 22, 2006 EARNEST Foresight Workshop Berlin
