
CC-NIE workshop : Campus Infrastructure GENI racks


Presentation Transcript


  1. CC-NIE workshop : Campus Infrastructure GENI racks Heidi Picher Dempsey January 7, 2013 www.geni.net

  2. Outline • GENI Racks and Connections • Campus Requirements • GENI Rack Installation and Support

  3. GENI Racks and Connections
  • Racks provide reservable, sliceable compute and network resources using Aggregate Managers (AMs)
  • Comply with the GENI AM API
  • Support GENI RSpec v3
  • Support federation with existing Slice Authorities (GENI Project Office (GPO), ProtoGENI (University of Utah), and PlanetLab Central (Princeton University)) for access now
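
  To make the AM API point concrete: the sketch below (Python, assuming AM API v2 field names) calls GetVersion on a rack aggregate to see which API versions and RSpec formats it advertises. The aggregate URL and certificate/key paths are placeholders; real aggregates authenticate the SSL connection with your GENI-issued certificate.

```python
# Minimal sketch: query a GENI Aggregate Manager's GetVersion method over
# the XML-RPC-based GENI AM API. The AM URL below is a placeholder; GENI
# aggregates expect your GENI-issued certificate/key as the SSL client
# identity.
import ssl
import xmlrpc.client

AM_URL = "https://am.example.edu:12346"  # placeholder aggregate URL

ctx = ssl.create_default_context()
ctx.load_cert_chain(certfile="geni_cert.pem", keyfile="geni_key.pem")

am = xmlrpc.client.ServerProxy(AM_URL, context=ctx)

# GetVersion reports which AM API versions and RSpec formats (e.g., GENI
# RSpec v3) the rack supports. Field names here follow AM API v2
# conventions and may differ on older aggregates.
version = am.GetVersion()
print("AM API version:", version.get("geni_api"))
for ad in version.get("value", {}).get("geni_ad_rspec_versions", []):
    print("advertises RSpec:", ad.get("type"), ad.get("version"))
```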

  4. Racks and Connections (cont.)
  • Racks are GENI Aggregates
  • GENI Meta-Operations Center (GMOC, Indiana University) provides support, monitoring, and escalation
  • Internet2 and NLR provide core data plane resources that experimenters can control
  • Regionals provide more network resources that experimenters can control (CENIC, GpENI, KanREN, MOXI, MAX, NYSERNET, SOX, UEN)
  • GENI network resources interconnect and coexist with other research networks (e.g. StarLight)
  [Diagram: Experimenter Tools, Identity Provider, GENI Clearinghouse, GMOC, and Aggregates*]
  * Includes GENI racks and (if desired) designated campus resources

  5. Core Connections: Layer 2 (Now)
  • Multiple 1G and 10G connections with VLANs connecting experimenter nodes
  • Campus access to Internet2 via ION/DYNES or direct connection (existing or AL2S)
  • Campus access to NLR via FrameNet or direct connection
  [Map: Internet2 topology with GPO edits, showing GENI AL2S and ProtoGENI (PG) attachment points, NLR (5-8 nodes), and peering]

  6. Core Connections: Internet2 AL2S

  7. Core Connections Coming Soon (starting 2013)
  • Full GENI implementation on AL2S
  • Peering with multiple SDN networks possible (e.g. NLR, Southeast Network Access Point)
  • Campus access via stitching or direct connection to AL2S (see CC-NIE architecture slides)
  • Support experimenter control of nodes or access to AL2S production services

  8. GENI Rack Campus Requirements
  • Provide space, power, security (as with other campus IT resources)
  • Provide at least 1Gbps OpenFlow/SDN path from rack to campus boundary (one possible campus-side configuration is sketched below)
  • Connect campus resources to GENI rack for faculty/experimenter use
  • Operate with up-to-date GENI-specified software (e.g. AM API, OpenStack)
  • Provide no-cost access to rack resources for GENI authorized users at other campuses
  • Provide points of contact for GENI response team (see http://groups.geni.net/geni/attachment/wiki/ComprehensiveSecurityPgm/Aggregate%20Provider%20Agreement%20v3.pdf)
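
  As one illustration of the OpenFlow/SDN path requirement, the hypothetical sketch below shows a campus border host using Open vSwitch to bridge the rack uplink to the campus network and hand control of that bridge to the campus SDN controller. All interface names and addresses are placeholders, not a prescribed configuration.

```python
# Hypothetical sketch of one way a campus might provision its side of the
# 1Gbps OpenFlow/SDN path with Open vSwitch: bridge the rack uplink to a
# campus-facing port and delegate forwarding decisions to the campus
# OpenFlow controller. Interface names and the controller address are
# placeholders.
import subprocess

def sh(*cmd):
    """Run one command, echoing it first for audit purposes."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

sh("ovs-vsctl", "add-br", "br-geni")             # data plane bridge
sh("ovs-vsctl", "add-port", "br-geni", "eth1")   # uplink toward GENI rack
sh("ovs-vsctl", "add-port", "br-geni", "eth2")   # campus-facing port
# Point the bridge at the campus SDN controller (placeholder address).
sh("ovs-vsctl", "set-controller", "br-geni", "tcp:10.0.0.5:6633")
# Fail closed if the controller is unreachable, so experiment traffic
# never leaks onto the campus production network.
sh("ovs-vsctl", "set-fail-mode", "br-geni", "secure")
```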

  9. Installation and Support: GENI Rack Teams
  • InstaGENI: University of Utah (software and engineering), partnered with HP Labs (commercial hardware/firmware), Northwestern University (deployment coordination and engineering), and Princeton (PlanetLab integration)
  • ExoGENI: RENCI and Duke (software and engineering), IBM (commercial hardware/firmware and on-site installation)
  • GENI also provides OpenFlow developer support for both teams via Open Network Labs
  • All teams support open source development and share via their project and GENI wikis and repositories

  10. GENI Rack Installation and Support Flow

  11. Support: GPO Testing
  • Acceptance Tests for experimenter, administrator, and monitoring functions still underway
  • ExoGENI experimenter functions good; shared monitoring and administration in progress
  • InstaGENI network and administration tests delayed by delivery logistics; monitoring just added
  • Confirmation Tests for each installation
  • Interoperability testing for GENI AM API and RSpecs with Omni command-line tool releases (see the sketch below)
  • Latest status:
  http://groups.geni.net/geni/wiki/GENIRacksHome/ExogeniRacks/AcceptanceTestStatus
  http://groups.geni.net/geni/wiki/GENIRacksHome/InstageniRacks/AcceptanceTestStatus
  http://groups.geni.net/geni/wiki/GENIRacksHome/ExogeniRacks/ConfirmationTestStatus
  http://groups.geni.net/geni/wiki/GENIRacksHome/InstageniRacks/ConfirmationTestStatus
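
  The sketch below illustrates the shape of such an interoperability check, driving the Omni command-line tool (omni.py from the GENI gcf distribution) against a placeholder aggregate URL. It assumes Omni and its omni_config credentials are already installed as described on the GENI wiki.

```python
# Sketch of a basic interoperability check against one rack aggregate
# using the Omni command-line tool: ask for the AM API version, then
# fetch the advertisement RSpec. The aggregate URL is a placeholder, and
# omni.py must already be on PATH with a working omni_config.
import subprocess

AM_URL = "https://am.example.edu:12346"  # placeholder rack aggregate

for subcommand in ("getversion", "listresources"):
    result = subprocess.run(
        ["omni.py", "-a", AM_URL, subcommand],
        capture_output=True, text=True,
    )
    status = "ok" if result.returncode == 0 else "FAILED"
    print(f"{subcommand}: {status}")
```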

  12. Support: Access and Usage Policies
  • GENI Slice Authorities currently used for control plane access to GENI racks; clearinghouse in progress -- more on this in Marshall's talk
  • Campus sets policies for GENI rack connections to campus data plane before installation
  • Rack teams, GPO, and campus staff configure security policy control points for the data plane during installation and test (e.g. in campus, GENI rack, and Science DMZ switches/routers)
  • Campus staff use FOAM (with or without automated approval) for per-service operations control of GENI rack OpenFlow connections to the campus data plane (no admin needed for others); a hypothetical approval rule is sketched below
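
  The sketch below is a purely hypothetical illustration (not FOAM's actual API) of what an automated-approval rule might look like: approve OpenFlow sliver requests whose flowspace stays inside the VLAN block the campus delegated to GENI, and hold everything else for an administrator. All names and the VLAN range are invented for illustration.

```python
# Hypothetical auto-approval policy for OpenFlow sliver requests. This is
# an illustration of the idea behind "FOAM with automated approval", not
# FOAM's real interface: approve only flowspace confined to the VLAN
# range the campus delegated to GENI.
GENI_VLANS = range(1750, 1800)  # placeholder delegated VLAN block

def auto_approve(flowspace_rules):
    """Return True iff every requested rule stays inside GENI_VLANS."""
    return all(rule.get("dl_vlan") in GENI_VLANS for rule in flowspace_rules)

# Example (hypothetical) sliver request: two rules on delegated VLANs.
request = [{"dl_vlan": 1760, "port": 3}, {"dl_vlan": 1761, "port": 4}]
print("approve" if auto_approve(request) else "hold for admin review")
```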

  13. GENI Rack Campuses
  • 43 racks planned this year
  • Track on GENI wiki
  [Map, Oct. 24, 2012: campuses marked as "funds in hand" vs. "needs funding"]

  14. GENI Rack Spiral 5 Installations
  • 43 GENI-sponsored racks with integrated OpenFlow, compute nodes, and some support for dynamic VLANs deploying this year
  • More campuses adding racks independently (e.g. CC-NIE, commercial projects)
  • Software updates expected for each rack; will retest to verify
  • Schedules subject to change based on campus readiness – looking for early adopter interest from this workshop

  15. ExoGENI Draft Deployments (DRAFT ONLY, subject to change)

  16. InstaGENI Draft Deployments (DRAFT ONLY, subject to change)

  17. Current Support
  • Help for campuses and experimenters
  • GMOC helpdesk (call, ticket, or email 24x7x365): http://gmoc.grnoc.iu.edu/gmoc/index/support.html
  • help@geni.net mailing list
  • IRC/chat (informal): http://groups.geni.net/geni/wiki/HowTo/ConnectToGENIChatRoom
  • GMOC support for racks and OpenFlow campus infrastructure
  • Monitoring and status for GENI sites and racks: http://gmoc-db.grnoc.iu.edu and https://gmoc-db.grnoc.iu.edu/protected/ (requires admin password)
  • Scheduled/unscheduled outage reporting and calendars
  • Emergency Stop
  • Escalation, tracking, some troubleshooting for reported problems
  • Draft workflows
  • Security related support (Legal, Law Enforcement and Regulatory Reps): http://groups.geni.net/geni/attachment/wiki/ComprehensiveSecurityPgm/LLR%20Responsibilities%20of%20GENI.pdf

  18. Current GENI Monitoring Examples
  [Screenshots: Virtual Machines on Racks, FOAM aggregates, Slivers on Racks]
  * Open source monitoring client available in Python
  * Updated monitoring software running on all racks, backbones, and most OpenFlow aggregates
  * Monitoring uses URNs for resource names for better interoperability (see the sketch below)
  * Format for InstaGENI and ExoGENI reported data is similar
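
  The sketch below shows why URN naming helps interoperability: a GENI URN has the fixed form urn:publicid:IDN+<authority>+<type>+<name>, so a monitoring client can attribute any measurement to its aggregate without rack-specific parsing. The example URN is illustrative only.

```python
# Minimal parser for GENI URNs of the form
#   urn:publicid:IDN+<authority>+<type>+<name>
# Monitoring data keyed on such URNs can be attributed to an aggregate,
# resource type, and resource name uniformly across rack types.
def parse_geni_urn(urn):
    prefix = "urn:publicid:IDN+"
    if not urn.startswith(prefix):
        raise ValueError("not a GENI URN: %r" % urn)
    authority, rtype, name = urn[len(prefix):].split("+", 2)
    return {"authority": authority, "type": rtype, "name": name}

# Illustrative sliver URN (name invented for the example).
print(parse_geni_urn("urn:publicid:IDN+instageni.gpolab.bbn.com+sliver+vm42"))
```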

  19. Current Vendor Experience Examples
  • Vendors often don't implement the full OpenFlow spec (one way to probe a switch is sketched below)
  • Hybrid mode support varies significantly
  • The Quilt RFP for SDN vendors: http://www.thequilt.net/index.php/quilt-news/231-quilt-announces-openflow-switch-authorized-quilt-providers
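
  The sketch below shows one low-level way to check what a switch actually implements: speak just enough OpenFlow 1.0 (HELLO, then FEATURES_REQUEST) to read back its capability and action bitmaps. It assumes the switch exposes a passive listening socket (e.g., an Open vSwitch bridge with a ptcp: controller target); the address and port are placeholders.

```python
# Probe an OpenFlow 1.0 switch's advertised feature set. OpenFlow 1.0
# messages share an 8-byte header: version(1) type(1) length(2) xid(4).
import socket
import struct

OFPT_HELLO, OFPT_FEATURES_REQUEST, OFPT_FEATURES_REPLY = 0, 5, 6

def read_msg(sock):
    """Read one OpenFlow message: header, then any body it declares."""
    header = sock.recv(8, socket.MSG_WAITALL)
    version, mtype, length, xid = struct.unpack("!BBHI", header)
    body = sock.recv(length - 8, socket.MSG_WAITALL) if length > 8 else b""
    return mtype, body

# Placeholder switch address; assumes a passive listener (e.g. OVS ptcp).
s = socket.create_connection(("192.0.2.10", 6634))
s.sendall(struct.pack("!BBHI", 0x01, OFPT_HELLO, 8, 1))
s.sendall(struct.pack("!BBHI", 0x01, OFPT_FEATURES_REQUEST, 8, 2))

while True:
    mtype, body = read_msg(s)   # skip the switch's HELLO, echoes, etc.
    if mtype == OFPT_FEATURES_REPLY:
        # Reply body: datapath_id(8) n_buffers(4) n_tables(1) pad(3)
        # capabilities(4) actions(4) then the port list.
        capabilities, actions = struct.unpack("!II", body[16:24])
        print("capabilities bitmap: 0x%08x" % capabilities)
        print("actions bitmap:      0x%08x" % actions)
        break
```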
